Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models
Ziyi Yin, Muchao Ye

Neural Information Processing Systems

Vision-Language (VL) pre-trained models have shown their superiority on many multimodal tasks. However, the adversarial robustness of such models has not been fully explored. Existing approaches mainly focus on the adversarial robustness under the white-box setting, which is unrealistic in practice. In this paper, we investigate a new yet practical task: crafting image and text perturbations with pre-trained VL models to attack black-box fine-tuned models on different downstream tasks.
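The abstract contrasts its black-box setting with the white-box attacks most prior work studies. As background only (this is not the paper's method), the classic white-box FGSM step can be sketched on a toy logistic-regression "image" classifier; the model, weights, and epsilon below are all illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(x, w, y):
    """Binary cross-entropy loss and its gradient w.r.t. the input x
    for a toy logistic-regression model p = sigmoid(w @ x)."""
    p = sigmoid(w @ x)
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    grad_x = (p - y) * w  # d(loss)/dx for logistic regression
    return loss, grad_x

def fgsm(x, w, y, eps=0.1):
    """One FGSM step: move the input along the sign of the loss gradient."""
    _, g = loss_and_grad(x, w, y)
    return x + eps * np.sign(g)

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # toy model weights (assumed)
x = rng.normal(size=8)   # toy "clean" input
y = 1.0                  # true label

clean_loss, _ = loss_and_grad(x, w, y)
x_adv = fgsm(x, w, y)
adv_loss, _ = loss_and_grad(x_adv, w, y)
# The perturbed input incurs a higher loss than the clean one.
```

White-box attacks like this need the gradient of the victim model, which is exactly what the black-box setting in the paper denies the attacker.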


HQA-Attack: Toward High Quality Black-Box Hard-Label Adversarial Attack on Text

Neural Information Processing Systems

To alleviate the above issues, we propose a simple yet effective framework for producing a High-Quality black-box hard-label Adversarial Attack, named HQA-Attack. The overview of HQA-Attack is shown in Figure 1. By "high quality", we mean that the HQA-Attack method can generate
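In the hard-label setting the attacker sees only the victim's final label, not scores or gradients. A minimal generic sketch of such a decision-only text attack (not HQA-Attack itself; the toy classifier and synonym table are assumptions for illustration) is:

```python
# Toy "positive word" sentiment classifier exposing only its label (hard-label setting).
POSITIVE_WORDS = {"good", "great", "excellent", "enjoyable"}

def victim_label(text):
    hits = sum(w in POSITIVE_WORDS for w in text.lower().split())
    return "positive" if hits >= 2 else "negative"

# Small hand-made synonym table (an assumption for illustration).
SYNONYMS = {
    "good": ["decent", "fine"],
    "great": ["notable", "big"],
    "excellent": ["passable", "okay"],
}

def hard_label_attack(text):
    """Greedily try synonym substitutions until the victim's label flips,
    querying only the predicted label at each step."""
    original = victim_label(text)
    words = text.split()
    for i, w in enumerate(words):
        for sub in SYNONYMS.get(w.lower(), []):
            adv = " ".join(words[:i] + [sub] + words[i + 1:])
            if victim_label(adv) != original:  # only the label is observed
                return adv
    return text  # attack failed; return the input unchanged

adv_text = hard_label_attack("a good and great movie")
```

A real hard-label attack must also keep the perturbed text semantically close to the original, which is the "high quality" aspect the abstract emphasizes.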